
    Gray-level co-occurrence matrix bone fracture detection

    Problem statement: Currently, doctors in orthopedic wards inspect bone x-ray images according to their experience and knowledge of bone fracture analysis. Manual examination of x-rays has a multitude of drawbacks: the process is time-consuming and subjective. Approach: Since the detection of fractures is an important orthopedic and radiologic problem, a Computer Aided Detection (CAD) system should be developed to improve the situation. In this study, a fracture detection CAD based on GLCM recognition is proposed to improve the current manual inspection of x-ray images. The GLCM for fractured and non-fractured bone is computed and analyzed, and the features homogeneity, contrast, energy and correlation are calculated to classify the fractured bone. Results: 30 images of femur fractures were tested; the results show that the CAD system can classify x-ray images into fractured and non-fractured femurs, with an accuracy of 86.67%. Conclusion: The CAD system proved effective in classifying digital radiographs of bone fractures. Although the accuracy rate is not perfect, the performance of the system can be further improved using multiple GLCM features, and future work can classify bones into different degrees of fracture.
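    The texture features named above follow directly from the co-occurrence matrix definition. The snippet below is a minimal pure-NumPy illustration, not the paper's implementation: it builds a symmetric, normalised GLCM for a single pixel offset and derives homogeneity, contrast, energy and correlation; the toy 4-level patch stands in for a quantised radiograph.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Symmetric, normalised grey-level co-occurrence matrix for one offset."""
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = image[y, x], image[y + dy, x + dx]
            m[i, j] += 1
            m[j, i] += 1          # symmetric counting
    return m / m.sum()

def glcm_features(p):
    """Homogeneity, contrast, energy and correlation from a normalised GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return {"homogeneity": homogeneity, "contrast": contrast,
            "energy": energy, "correlation": correlation}

# Toy 4-level "radiograph" patch; a real system would quantise the x-ray first.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
feats = glcm_features(glcm(patch, levels=4))
```

    A classifier would then threshold or learn on such feature vectors computed from fractured and non-fractured training images.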

    Multipurpose contrast enhancement on epiphyseal plates and ossification centers for bone age assessment

    BACKGROUND: The high variation of background luminance, low contrast and excessively enhanced contrast of hand bone radiographs often impede bone age assessment rating systems in evaluating the degree of development of epiphyseal plates and ossification centers. Global Histogram Equalization (GHE) has been the most frequently adopted contrast enhancement technique, but its performance is not satisfying. A brightness- and detail-preserving histogram equalization method with good contrast enhancement has been a goal of much recent research, yet producing a well-balanced histogram-equalized radiograph in terms of brightness preservation, detail preservation and contrast enhancement remains a daunting task. METHOD: In this paper, we propose a novel histogram equalization framework that takes several desirable properties into account, the Multipurpose Beta Optimized Bi-Histogram Equalization (MBOBHE). This method equalizes the two sub-histograms separately after segmenting the histogram at an optimized separating point determined by a regularization function constituted of three components. The result is then assessed by qualitative and quantitative analyses of the essential aspects of the equalized images, using a total of 160 hand radiographs acquired from an online hand bone database. RESULT: The qualitative analysis shows that basic bi-histogram equalizations cannot display small image features because they select the separating point using only a single metric, without considering contrast enhancement and detail preservation. The quantitative analysis shows that MBOBHE correlates well with human visual perception, and this improvement shortens the time an inspector needs to assess bone age.
CONCLUSIONS: The proposed MBOBHE outperforms existing methods in the comprehensive performance of histogram equalization. All features pertinent to bone age assessment are more prominent than with other methods, which shortens the evaluation time required for manual bone age assessment using the TW method, while the accuracy remains unaffected or slightly better than with the unprocessed original image. Brightness preservation, detail preservation and contrast enhancement are taken into consideration simultaneously, so the visual effect is conducive to manual inspection.
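    As a rough sketch of the bi-histogram idea that MBOBHE builds on: the histogram is split at a separating point, each sub-histogram is equalized within its own intensity range, and the separating point is chosen by optimizing a criterion. The snippet below is illustrative only; it uses the absolute mean-brightness error as a stand-in for the paper's three-component regularization function, and a synthetic image in place of a real radiograph.

```python
import numpy as np

def bi_histogram_equalize(img, sep):
    """Equalize the [0..sep] and [sep+1..255] sub-histograms independently,
    mapping each sub-range back onto itself (BBHE-style)."""
    out = np.empty_like(img)
    for lo, hi in ((0, sep), (sep + 1, 255)):
        mask = (img >= lo) & (img <= hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / hist.sum()
        out[mask] = lo + np.round(cdf[img[mask] - lo] * (hi - lo)).astype(img.dtype)
    return out

def best_separating_point(img, candidates=range(16, 240, 16)):
    """Brute-force search for the separating point; the absolute mean-brightness
    error used here is a stand-in for MBOBHE's regularization function."""
    scores = {s: abs(bi_histogram_equalize(img, s).mean() - img.mean())
              for s in candidates}
    return min(scores, key=scores.get)

rng = np.random.default_rng(0)
radiograph = rng.integers(40, 200, size=(64, 64)).astype(np.uint8)  # synthetic
sep = best_separating_point(radiograph)
enhanced = bi_histogram_equalize(radiograph, sep)
```

    Equalizing the two halves separately keeps the output mean close to the input mean, which is the brightness-preservation property the abstract refers to.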

    In-socket sensory system with an adaptive neuro-based fuzzy inference system for active transfemoral prosthetic legs

    An in-socket sensory system enables the monitoring of transfemoral amputee movement for a microprocessor-controlled prosthetic leg. User movement recognition from an in-socket sensor allows a powered prosthetic leg to actively mimic healthy ambulation, thereby reducing an amputee's metabolic energy consumption. This study established an adaptive neuro-fuzzy inference system (ANFIS)-based control input framework that classifies gait phases from in-socket sensor array signals to derive user intention. Each recognized gait phase was mapped to the cadence and torque control output of a knee joint actuator. The control input framework was validated with 30 experimental gait samples of the in-socket sensory signal of a transfemoral amputee walking at fluctuating speeds of 0 to 2 km/h. The physical simulation of the controller produced a realistic actuated knee joint mechanism, with 95% to 99% accuracy in knee cadence and 80% to 90% accuracy in torque compared with normal gait. The ANFIS successfully detected the seven gait phases from the amputee's in-socket sensor signals and assigned accurate knee joint torque and cadence values as output. © 2018 SPIE and IS&T
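    At its core, ANFIS performs Takagi-Sugeno fuzzy inference whose membership and consequent parameters are tuned by training. The sketch below is a hand-specified zeroth-order Sugeno system, not the trained controller from the study: the rule centers, widths and cadence outputs are invented for illustration, mapping a normalized in-socket pressure feature to a knee cadence command.

```python
import numpy as np

def gaussmf(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_infer(x, rules):
    """Zeroth-order Takagi-Sugeno inference: the output is the average of the
    rule consequents weighted by their Gaussian firing strengths."""
    w = np.array([gaussmf(x, c, s) for c, s, _ in rules])
    z = np.array([out for _, _, out in rules])
    return float(np.sum(w * z) / np.sum(w))

# Hypothetical rules mapping a normalized in-socket pressure feature (0..1)
# to a knee cadence command (steps/min); a real ANFIS learns these by training.
rules = [(0.1, 0.15, 0.0),    # low pressure  -> stance, low cadence
         (0.5, 0.15, 45.0),   # mid pressure  -> mid-swing
         (0.9, 0.15, 90.0)]   # high pressure -> terminal swing, high cadence

cadence_low = sugeno_infer(0.1, rules)
cadence_high = sugeno_infer(0.9, rules)
```

    In the full system, one such inference stage per gait phase would drive the cadence and torque set-points of the knee actuator.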

    An artifacts removal post-processing for epiphyseal region-of-interest (EROI) localization in automated bone age assessment (BAA)

    Background: Segmentation is the most crucial part of computer-aided bone age assessment. A well-known approach is adaptive segmentation; while it provides better results than global thresholding, it produces a lot of unwanted noise that can affect the subsequent epiphysis extraction. Methods: We propose anisotropic diffusion as pre-processing and a novel Bounded Area Elimination (BAE) post-processing algorithm to improve the ossification site localization technique, with the intent of improving the adaptive segmentation result and the region-of-interest (ROI) localization accuracy. Results: The results are evaluated by quantitative and qualitative analyses using texture feature evaluation. Image homogeneity after anisotropic diffusion improved by an average of 17.59% for each age group. Experiments showed that smoothness improved by an average of 35% after the BAE algorithm, ROI localization improved by an average of 8.19%, and the MSSIM improved by an average of 10.49% after applying BAE to the adaptively segmented hand radiographs. Conclusions: Hand radiographs that have undergone anisotropic diffusion show greatly reduced noise in the segmented image, and the proposed BAE algorithm is capable of removing the artifacts generated by adaptive segmentation.
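    The Bounded Area Elimination step can be pictured as connected-component filtering: artifacts left by adaptive segmentation tend to be small isolated regions, so components whose area falls below a bound are discarded. The pure-NumPy sketch below captures that spirit (it is an assumption about the algorithm, not the paper's code): it removes 4-connected components below a minimum area from a toy segmentation mask.

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_area):
    """Keep only 4-connected foreground components with >= min_area pixels."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # Flood-fill one component from the unvisited seed pixel.
            comp, q = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if len(comp) >= min_area:
                for y, x in comp:
                    out[y, x] = True
    return out

# A segmented "bone" blob plus speckle noise (isolated pixels).
seg = np.zeros((8, 8), dtype=bool)
seg[2:6, 2:6] = True          # 16-pixel bone-like region, kept
seg[0, 7] = seg[7, 0] = True  # two 1-pixel artifacts, eliminated
clean = remove_small_components(seg, min_area=4)
```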

    Editorial: Emerging applications of text analytics and natural language processing in healthcare

    Text analytics and natural language processing (NLP) have emerged as powerful tools in healthcare, revolutionizing patient care, clinical research, and public health administration. As healthcare databases expand exponentially, healthcare providers and the pharmaceutical and biotech industries are utilizing both tools to enhance patient outcomes.

    Transfer learning-assisted 3D deep learning models for knee osteoarthritis detection: Data from the osteoarthritis initiative

    Knee osteoarthritis is one of the most common musculoskeletal diseases and is usually diagnosed with medical imaging techniques. Conventionally, case identification using plain radiography is practiced. However, knee osteoarthritis is a three-dimensional pathology; hence, magnetic resonance imaging is the ideal modality to reveal hidden osteoarthritis features from a three-dimensional view. In this work, the feasibility of well-known convolutional neural network (CNN) structures (ResNet, DenseNet, VGG, and AlexNet) to distinguish knees with and without osteoarthritis (OA) is investigated. Using 3D convolutional layers, we demonstrate the potential of 13 different 3D CNN architectures in knee osteoarthritis diagnosis. We used transfer learning by transforming 2D pre-trained weights into 3D as initial weights for training the 3D models. The performance of the models was compared and evaluated based on balanced accuracy, precision, F1 score, and area under the receiver operating characteristic curve (AUC). This study suggests that transfer learning indeed enhances the performance of the models, especially for the ResNet and DenseNet architectures. Transfer learning-based models presented promising results, with ResNet34 achieving the best overall accuracy of 0.875 and an F1 score of 0.871. The results also showed that shallower networks yielded better performance than deeper ones, as demonstrated by ResNet18, DenseNet121, and VGG11 with AUC values of 0.945, 0.914, and 0.928, respectively. This encourages the application of 3D CNN-based clinical diagnostic aids for knee osteoarthritis even under limited hardware conditions.
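    One common way to transform 2D pre-trained weights into 3D, which may correspond to what is done here, is I3D-style inflation: each 2D kernel is tiled along the new depth axis and rescaled so that a depth-constant input yields the same activation as the 2D network. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def inflate_2d_to_3d(w2d, depth):
    """Tile a pretrained 2D conv kernel along a new depth axis and rescale
    by 1/depth, so a depth-constant input reproduces the 2D activation."""
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

rng = np.random.default_rng(0)
w2d = rng.normal(size=(8, 3, 3, 3))    # (out_ch, in_ch, kH, kW)
w3d = inflate_2d_to_3d(w2d, depth=3)   # (out_ch, in_ch, kD, kH, kW)
```

    Because the depth copies sum back to the original kernel, the inflated 3D model starts from a point functionally equivalent to the 2D pre-trained network before fine-tuning on volumetric MRI.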

    Predicting occupational injury causal factors using text-based analytics : A systematic review

    Workplace accidents can cause catastrophic losses to a company, including human injuries and fatalities. Occupational injury reports may provide a detailed description of how incidents occurred, so the narrative is useful information for extracting, classifying and analyzing occupational injury. This study provides a systematic review of text mining and Natural Language Processing (NLP) applications for extracting text narratives from occupational injury reports. A systematic search was conducted through multiple databases, including Scopus, PubMed, and Science Direct. Only original studies that examined the application of machine and deep learning-based NLP models for occupational injury analysis were included. A total of 27 out of 210 articles were reviewed by adopting the Preferred Reporting Items for Systematic Reviews (PRISMA). The review highlights that various machine and deep learning-based NLP models, such as K-means, Naïve Bayes, Support Vector Machine, Decision Tree, and K-Nearest Neighbors, have been applied to predict occupational injury. Beyond these models, deep neural networks have also been used to classify accident types and identify causal factors. However, deep learning models are still rarely used for extracting information from occupational injury reports, largely because these techniques are relatively recent and are only beginning to make inroads into decision-making in occupational safety and health as a whole. Nonetheless, there is a large and promising potential for applying NLP and text-based analytics in occupational injury research. Therefore, improving data balancing techniques and developing an automated decision-making support system for occupational injury using deep learning-based NLP models are the recommendations for future research.
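    To illustrate the kind of narrative classification the reviewed studies perform, the sketch below trains a multinomial Naïve Bayes classifier (one of the models named above) on a tiny invented corpus of injury narratives. The reports and labels are fabricated for illustration; real studies work with thousands of incident reports and richer features such as TF-IDF.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Fit multinomial Naive Bayes over bag-of-words counts."""
    counts, totals, priors = defaultdict(Counter), Counter(), Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
        vocab.update(words)
        priors[label] += 1
    return counts, totals, priors, vocab

def predict_nb(model, text):
    """Pick the label maximizing log prior + Laplace-smoothed log likelihood."""
    counts, totals, priors, vocab = model
    n_docs = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / n_docs)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented mini-corpus of injury narratives with cause labels.
reports = [
    ("worker fell from ladder fracturing arm", "fall"),
    ("employee slipped on wet floor and fell", "fall"),
    ("hand caught in press machine", "machinery"),
    ("finger crushed by conveyor machine", "machinery"),
]
model = train_nb(reports)
label = predict_nb(model, "operator fell off scaffolding")
```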

    Three dimensional nuchal translucency ultrasound segmentation using region growing for trisomy 21 early assessment

    Ultrasound prenatal screening has been proposed as one of the most effective techniques for early trisomy 21 assessment. The current practice using conventional B-mode ultrasonic images is limited by inter- and intra-observer variability. Therefore, we propose a three-dimensional segmentation technique for the ultrasound marker nuchal translucency (NT) as a replacement for existing manual two-dimensional NT thickness measurements. The developed generic computing algorithms are integrated with the VTK and ITK open-source libraries. Region growing was implemented with growth criteria and rendered in a reconstructed multiplanar view. The findings show that the developed algorithm was able to produce consistent three-dimensional NT segmentation.
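    The region-growing step can be sketched independently of VTK/ITK: starting from a seed voxel, 6-connected neighbours are accepted while they satisfy the growth criterion. Here a simple intensity tolerance around the seed value stands in for the study's actual criteria, which may differ, and the synthetic volume mimics a dark NT cavity inside brighter tissue.

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, tol):
    """Grow a 3D region from a seed voxel, accepting 6-connected neighbours
    whose intensity is within tol of the seed value."""
    seg = np.zeros(vol.shape, dtype=bool)
    seed_val = float(vol[seed])
    q = deque([seed])
    seg[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not seg[n] \
                    and abs(float(vol[n]) - seed_val) <= tol:
                seg[n] = True
                q.append(n)
    return seg

# Synthetic volume: a dark cavity (intensity 10) inside brighter tissue (100);
# real input would be the reconstructed 3D ultrasound scan.
vol = np.full((10, 10, 10), 100, dtype=np.uint8)
vol[3:6, 3:6, 3:6] = 10
seg = region_grow_3d(vol, seed=(4, 4, 4), tol=20)
```

    The resulting binary volume is what a multiplanar or surface-rendering stage would then visualize.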

    Surface rendering of three dimensional ultrasound images using VTK

    This study utilized the VTK visualization toolkit to demonstrate the reconstruction of medical images using a surface rendering technique. Simulation results show three-dimensional surface rendering of multiple-slice ultrasound fetal phantom images in an orthogonal view, with the reconstructed coronal and sagittal images in the upper left and upper right, and an internal view obtained by changing the camera position. Three-dimensional reconstruction using surface rendering via the VTK library is accomplished and found to be effective in terms of cost and performance for medical imaging.